0266e33d3f546cb5436a10798e657d97-Reviews.html

Neural Information Processing Systems

The paper provides a unified framework for analyzing model selection consistency with geometrically decomposable penalties. As special cases of this framework, it also derives consistency results for several machine learning examples. The paper addresses an interesting topic: while model selection consistency has already been derived individually in several prior works, there is a recent trend toward unified frameworks that provide statistical guarantees for broader classes of M-estimators. Nevertheless, my major concern with this paper is its clarity; it is not clearly written, and the exposition is somewhat terse. For example, the motivating special cases are described only briefly in the Introduction, not in Section 2, where the authors introduce geometric decomposability.
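For readers unfamiliar with the term, the following is a minimal sketch of geometric decomposability as I understand it from Lee, Sun and Taylor's framework; the sets named below (A, I, and the subspace S) are illustrative notation and should be checked against the paper's Section 2. A penalty \rho is geometrically decomposable if it splits into support functions of closed convex sets:

\[
  \rho(x) = h_A(x) + h_I(x) + h_{S^{\perp}}(x),
  \qquad
  h_C(x) := \sup_{y \in C} \langle y, x \rangle ,
\]

where A and I are closed convex sets, S is a subspace, and h_C is the support function of C. For the lasso there is no subspace constraint, and \|x\|_1 = h_A(x) + h_I(x) with A = \{ y : \|y\|_\infty \le 1, \ y_{T^c} = 0 \} and I = \{ y : \|y\|_\infty \le 1, \ y_T = 0 \} for the true support T, since h_A(x) + h_I(x) = \|x_T\|_1 + \|x_{T^c}\|_1 = \|x\|_1.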


On model selection consistency of regularized M-estimators

Lee, Jason D., Sun, Yuekai, Taylor, Jonathan E.

arXiv.org Machine Learning

Regularized M-estimators are used in diverse areas of science and engineering to fit high-dimensional models with some low-dimensional structure. Usually the low-dimensional structure is encoded by the presence of the (unknown) parameters in some low-dimensional model subspace. In such settings, it is desirable for estimates of the model parameters to be "model selection consistent": the estimates also fall in the model subspace. We develop a general framework for establishing consistency and model selection consistency of regularized M-estimators and show how it applies to some special cases of interest in statistical learning. Our analysis identifies two key properties of regularized M-estimators, referred to as geometric decomposability and irrepresentability, that ensure the estimators are consistent and model selection consistent.
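The framework itself is theoretical, but the property it guarantees is easy to probe empirically. Below is a minimal sketch (not code from the paper; the dimensions, noise level, regularization scaling, and support threshold are illustrative assumptions) that fits a lasso, one of the geometrically decomposable special cases, and checks whether the estimated support coincides with the true model subspace, i.e. whether the estimate is model selection consistent on this instance.

import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)

# Illustrative sizes: n samples, p features, s-sparse truth.
n, p, s = 200, 50, 5
sigma = 0.1                          # assumed noise level
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:s] = 1.0                       # true parameters lie in an s-dimensional model subspace
y = X @ beta + sigma * rng.standard_normal(n)

# The lasso is a regularized M-estimator with a geometrically
# decomposable penalty (the l1 norm).  The scaling
# lambda ~ sigma * sqrt(log p / n) is the usual regime in which
# support-recovery guarantees of this kind apply.
lam = 2 * sigma * np.sqrt(np.log(p) / n)
fit = Lasso(alpha=lam).fit(X, y)

support_hat = np.flatnonzero(np.abs(fit.coef_) > 1e-8)
support_true = np.flatnonzero(beta)

# Model selection consistency: the estimate falls in the same
# low-dimensional model subspace, i.e. the supports coincide.
print("estimated support:", support_hat)
print("true support:     ", support_true)
print("supports match:   ", np.array_equal(support_hat, support_true))

For random Gaussian designs like this one, the irrepresentability condition the abstract refers to typically holds with high probability when s is small, so exact support recovery is the expected outcome at this sample size.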